1.
Korean Journal of Medical Education ; : 193-204, 2019.
Article in English | WPRIM | ID: wpr-759895

ABSTRACT

PURPOSE: Assessments in different languages should measure the same construct. However, item characteristics, such as item flaws and content, may favor one test-taker group over another; this is known as item bias. Although some studies have focused on item bias, little is known about its association with item characteristics. Therefore, this study investigated the association between item characteristics and item bias. METHODS: The University of Groningen offers both an international and a national bachelor’s program in medicine. Students in both programs take the same progress test, but the international version is a literal English translation of the Dutch original. Differential item functioning was calculated to analyze item bias in four consecutive progress tests. Items were also classified by category, number of alternatives, item flaws, item length, and whether they were case-based. RESULTS: The proportion of items with bias ranged from 34% to 36% across the tests. The number of biased items and the size of their bias were very similar in both programs. More complex items with more alternatives favored the national students, whereas shorter items with fewer alternatives favored the international students. CONCLUSION: Although nearly 35% of all items contained bias, the distribution and size of the bias were similar for both groups. These findings may be used to improve the item-writing process by avoiding characteristics that benefit one group while disadvantaging the other.


Subject(s)
Humans , Bias , Education, Medical , Educational Measurement , Writing
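The abstract above reports that differential item functioning (DIF) was calculated but does not say which statistic was used. As a minimal sketch of one standard approach, the Mantel–Haenszel procedure compares correct/incorrect counts for two groups within matched ability strata; the choice of method and all counts below are illustrative assumptions, not the study's actual data.

```python
import math

def mh_odds_ratio(strata):
    """Mantel-Haenszel common odds ratio across ability strata.

    Each stratum is a tuple (a, b, c, d):
      a = reference-group correct,  b = reference-group incorrect,
      c = focal-group correct,      d = focal-group incorrect.
    A ratio near 1 suggests no DIF; values far from 1 suggest the item
    favors one group even after matching on ability.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in strata)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in strata)
    return num / den

def mh_delta(alpha):
    """ETS delta scale: MH D-DIF = -2.35 * ln(alpha).

    By convention, |delta| >= 1.5 (with statistical significance)
    is flagged as large DIF."""
    return -2.35 * math.log(alpha)

# Invented counts for one item, stratified into three ability levels
# (e.g., national program as reference group, international as focal).
strata = [(40, 10, 35, 15), (30, 20, 25, 25), (15, 35, 10, 40)]
alpha = mh_odds_ratio(strata)
delta = mh_delta(alpha)
```

Here an odds ratio above 1 would indicate the item is relatively easier for the reference group at matched ability levels, the kind of group-level comparison the study describes.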
2.
Journal of Educational Evaluation for Health Professions ; : 28-2018.
Article in English | WPRIM | ID: wpr-764450

ABSTRACT

PURPOSE: It is assumed that case-based questions require higher-order cognitive processing, whereas non-case-based questions require lower-order cognitive processing. In this study, we investigated to what extent case-based and non-case-based questions followed this assumption, based on Bloom's taxonomy. METHODS: A total of 4,800 questions from the Interuniversity Progress Test of Medicine were classified by whether they were case-based and by the level of Bloom's taxonomy they involved. Lower-order questions require students to remember and/or have a basic understanding of knowledge; higher-order questions require students to apply, analyze, and/or evaluate. The phi coefficient was calculated to investigate the relationship between whether a question was case-based and its required level of cognitive processing. RESULTS: Of the case-based questions, 98.1% required higher-order cognitive processing, compared with 33.7% of the non-case-based questions. The phi coefficient demonstrated a significant, moderate correlation between the presence of a patient case in a question and its required level of cognitive processing (phi coefficient = 0.55, P < 0.001). CONCLUSION: Medical instructors should be aware of the association between item format (case-based versus non-case-based) and the cognitive processes elicited, in order to achieve the desired balance in a test, taking the learning objectives and test difficulty into account.


Subject(s)
Humans , Classification , Education, Medical , Educational Measurement , Learning , Netherlands
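The phi coefficient reported in the abstract above is the Pearson correlation between two dichotomous variables, computable directly from a 2x2 contingency table. A minimal sketch follows; the counts are invented (the abstract gives the row percentages but not the case-based versus non-case-based split of the 4,800 items), so the resulting value is not the study's reported 0.55.

```python
import math

def phi_coefficient(a, b, c, d):
    """Phi for a 2x2 table of counts:

                      higher-order   lower-order
      case-based           a              b
      non-case-based       c              d
    """
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Invented counts chosen to roughly match the reported row percentages
# (98.1% of case-based and 33.7% of non-case-based items higher-order);
# the actual split of the 4,800 items is not given in the abstract.
phi = phi_coefficient(1472, 28, 1112, 2188)
```

Phi ranges from -1 to 1; a perfectly diagonal table (all case-based items higher-order, all non-case-based items lower-order) yields phi = 1.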
